
Advanced Practical GenAI (Arabic)

1500.00 EGP


This is part 3 of the Practical GenAI series. The objective of the series is to prepare you to be a professional GenAI engineer/developer. I will take you from the ground up in the realm of LLMs and GenAI, starting from the very basics and ending with working, production-level apps. The spirit of the series is hands-on: all examples are code-based, with final projects built step by step in Python or Google Colab and deployed with Streamlit. By the end of the series, you will have built a ChatGPT clone, a Midjourney clone, a Chat-with-your-data app, a YouTube assistant app, an Ask-YouTube-Video app, a Study Mate app, a recommender system, an image-description app with GPT-4V, an image-generation app with DALL-E and Stable Diffusion, a video commentator app using Whisper, and others.

In this part you will work with different kinds of LLMs, both open source and proprietary. You will get exposed to GPT models by OpenAI, Llama models by Meta, Gemini and Bard by Google, Orca by Microsoft, Mixtral by Mistral AI, and others. You will use pre-trained models and also fine-tune them on your own data. We will learn about Huggingface and use it for model fine-tuning via Parameter-Efficient Fine-Tuning (PEFT), using Low-Rank Adaptation (LoRA) for efficient training. You will learn how to deploy a model in the cloud, or to host it privately to protect your company's data. You will also learn how to use existing pre-trained models as teachers and apply model distillation to train your own custom version of a model.
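To give a feel for why LoRA makes fine-tuning efficient, here is a minimal illustrative sketch (not course code; the layer size and rank below are hypothetical). Instead of updating a full weight matrix W, LoRA trains two small low-rank matrices B and A, so the effective weight becomes W + (alpha / r) · BA, and only B and A need gradients.

```python
# Hypothetical transformer layer: a 768 x 768 weight matrix, LoRA rank r = 8.
d_out, d_in, r = 768, 768, 8

full_params = d_out * d_in        # parameters a full fine-tune would update
lora_params = r * (d_out + d_in)  # parameters LoRA trains instead (B: d_out x r, A: r x d_in)

print(full_params)                                 # 589824
print(lora_params)                                 # 12288
print(round(100 * lora_params / full_params, 2))   # 2.08 (% of the full update)
```

With rank 8 the trainable parameters drop to roughly 2% of the layer, which is what makes fine-tuning large models feasible on a single GPU.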

    Pre-requisites

  • Python

  • NLP

  • Transformers

  • Generative AI Foundations

    Topics Covered

  • Huggingface Transformers Review

  • Open-Source LLMs: Llama 2, Mixtral and others

  • Privately hosted LLMs: GPT4All clone with Streamlit

  • Small Language Models (SLMs)

  • LLM fine-tuning with Huggingface using QLoRA

  • LLM fine-tuning with RLHF

  • GPT-3 fine-tuning

  • LLM fine-tuning with distillation
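The distillation topic above can be sketched in a few lines. In knowledge distillation, the student is trained to match the teacher's softened output distribution, obtained by dividing logits by a temperature T before the softmax. The logits and temperature below are made-up toy values, not the course's actual example.

```python
import math

def softmax(logits, T=1.0):
    """Softmax with temperature T; higher T gives softer probabilities."""
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, T=2.0):
    """Cross-entropy between the teacher's soft targets and the student's output."""
    p = softmax(teacher_logits, T)   # teacher's softened distribution
    q = softmax(student_logits, T)   # student's softened distribution
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

teacher = [4.0, 1.0, 0.2]   # toy teacher logits
student = [3.5, 1.2, 0.3]   # toy student logits
print(distillation_loss(teacher, student))
```

By Gibbs' inequality, this loss is minimized when the student's softened distribution exactly matches the teacher's, so gradient descent on it pulls the student toward the teacher's behavior.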

    What you will learn

  • Build excellent knowledge of the underlying mechanics of transformers and LLMs

  • Go through the full training cycle of LLMs

  • Work with open-source LLMs

  • Work with privately hosted models

  • Fine-tune pre-trained models with your own data